
Privacy Assistant


PEAK: Explainable Privacy Assistant through Automated Knowledge Extraction

Ayci, Gonul, Özgür, Arzucan, Şensoy, Murat, Yolum, Pınar

arXiv.org Artificial Intelligence

In the realm of online privacy, privacy assistants play a pivotal role in empowering users to manage their privacy effectively. Although recent studies have shown promising progress in tackling tasks such as privacy violation detection and personalized privacy recommendations, a crucial aspect for widespread user adoption is the capability of these systems to explain their decision-making processes. This paper presents a privacy assistant for generating explanations for privacy decisions. The privacy assistant focuses on discovering latent topics, identifying explanation categories, establishing explanation schemes, and generating automated explanations. The generated explanations help users understand the recommendations of the privacy assistant. Our user study on a real-world dataset of privacy-sensitive images shows that users find the generated explanations useful and easy to understand. Additionally, the generated explanations can be used by privacy assistants themselves to improve their decision-making. We show how this can be realized by incorporating the generated explanations into a state-of-the-art privacy assistant.
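
The abstract outlines a four-step pipeline: latent topics, explanation categories, explanation schemes, generated explanations. The paper's actual models are not given in this summary; the toy sketch below only mimics that flow with keyword matching, and every topic name, category, and template in it is an illustrative assumption, not PEAK's implementation.

```python
# Toy sketch of a PEAK-style explanation pipeline. All topic keywords,
# categories, and templates below are invented for illustration.

TOPIC_KEYWORDS = {
    "people": {"face", "person", "crowd"},
    "documents": {"card", "passport", "ticket"},
}

# Each latent topic maps to an explanation category and a template scheme.
EXPLANATION_SCHEMES = {
    "people": ("identity",
               "This photo is private because it shows {terms}, which can identify people."),
    "documents": ("sensitive-document",
                  "This photo is private because it contains {terms}, a sensitive document."),
}

def discover_topic(image_tags):
    """Pick the topic whose keyword set overlaps most with the image tags."""
    return max(TOPIC_KEYWORDS, key=lambda t: len(TOPIC_KEYWORDS[t] & set(image_tags)))

def explain(image_tags):
    """Return (explanation category, generated explanation text) for an image."""
    topic = discover_topic(image_tags)
    category, template = EXPLANATION_SCHEMES[topic]
    terms = ", ".join(sorted(TOPIC_KEYWORDS[topic] & set(image_tags)))
    return category, template.format(terms=terms)

category, text = explain(["card", "table", "ticket"])
print(category)  # sensitive-document
print(text)
```

The point of the sketch is the separation of concerns the abstract describes: topic discovery, category selection, and template-based generation are independent stages, so each could be swapped for a learned model.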


A Survey on Understanding and Representing Privacy Requirements in the Internet-of-Things

Ogunniye, Gideon (University of Edinburgh) | Kokciyan, Nadin (University of Edinburgh)

Journal of Artificial Intelligence Research

People interact with online systems all the time. In order to use the services being provided, they give consent for their data to be collected. This approach requires too much human effort and is impractical for systems like the Internet-of-Things (IoT), where the number of human-device interactions can be very large. Ideally, privacy assistants can help humans make privacy decisions while working in collaboration with them. In our work, we focus on the identification and representation of privacy requirements in IoT to help privacy assistants better understand their environment. In recent years, most of the focus has been on the technical aspects of privacy. However, the dynamic nature of privacy also requires a representation of social aspects (e.g., social trust). In this survey paper, we review the privacy requirements represented in existing IoT ontologies. We discuss how to extend these ontologies with new requirements to better capture privacy, and we introduce case studies to demonstrate the applicability of the novel requirements.
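
The survey's proposal, extending IoT ontologies with social aspects such as social trust alongside technical requirements, can be pictured as adding new properties to an existing vocabulary. Below is a minimal triple-based sketch; all class and property names are illustrative and are not drawn from any specific ontology reviewed in the paper.

```python
# Minimal triple store sketching how an IoT privacy ontology could be
# extended with a social-trust requirement (all names are illustrative).

triples = {
    # existing (technical) privacy requirements
    ("Camera", "rdf:type", "iot:Device"),
    ("Camera", "iot:collects", "VideoData"),
    ("VideoData", "privacy:requires", "Consent"),
}

# proposed extension: represent social trust between agents
triples |= {
    ("social:trusts", "rdf:type", "rdf:Property"),
    ("Alice", "social:trusts", "Bob"),
}

def requirements_for(data):
    """Collect the privacy requirements attached to a piece of data."""
    return {o for s, p, o in triples if s == data and p == "privacy:requires"}

print(requirements_for("VideoData"))  # {'Consent'}
```

Because the extension is just more triples over the same vocabulary mechanism, a privacy assistant can query technical and social requirements uniformly.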


Artificial Intelligence, Ratings, and the Small Print

#artificialintelligence

People have always balked at reading terms of service -- the acres of fine print at the bottom of insurance policies and product agreements and in pop-ups on apps and websites. It's so much easier and quicker to click "I agree" than to wade through hours of boring legalese. A 2016 academic study found that 98 percent of people signed up for a fictitious free Wi-Fi service, NameDrop, even though clause 2.3.1 of its terms states: "By agreeing to these Terms of Service, and in exchange for service, all users of this site agree to immediately assign their first-born child to NameDrop, Inc." In this age of big data, AI, and machine learning, there must be a better way for companies to present -- and for consumers to manage -- the small print. A sense of urgency to develop such systems is rising.


This app could help you regain control of your data

#artificialintelligence

Keeping track of the personal data your mobile apps are collecting, using, and sharing requires making sense of long, ambiguous, and often confusing privacy policies and permission settings. Privacy Assistant, a creation of researchers at Carnegie Mellon University, uses machine learning to give users more control over the information that the apps on their Android phones collect. It combines a user's answers to several questions (for example, "In general, do you feel comfortable with finance apps accessing your location?") with machine learning to recommend permission settings. The app is available only for rooted devices, meaning devices whose operating systems have been unlocked to allow for unapproved apps. But Norman Sadeh, a computer science professor who leads CMU's Personalized Privacy Assistant Project, hopes that a major tech company will eventually see the technology as a way to differentiate itself from its competitors, and pave the way for it to become a mainstream tool.


A.I. Privacy Assistants Could Stop You From Exposing Sensitive Info

#artificialintelligence

As the hundreds of people who have publicly posted pictures of their debit cards on Twitter can attest, it's often easy to unwittingly expose private information in the age of social media. But what if a friendly automated assistant, similar to Siri or Alexa, warned you before you shared sensitive images, potentially mitigating threats like online stalking and identity theft? That's the idea behind a recent study from researchers at the Max Planck Institute for Informatics in Germany, who say they've built an AI-powered privacy watchdog that can learn a person's privacy preferences and caution them whenever private information might be exposed in the pictures they post to social media. "Our model is trained to predict the user specific privacy risk and even outperforms the judgment of the users, who often fail to follow their own privacy preferences," the researchers write in a recent paper, which awaits peer review. "In fact -- as our study shows -- people frequently misjudge the privacy relevant information content in an image -- which leads to failure of enforcing their own privacy preferences."
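
The watchdog described above learns per-user preferences and flags images that exceed the user's own tolerance. A toy sketch of that scoring idea follows; the attribute names, weights, and threshold are invented for illustration and are not the researchers' trained model.

```python
# Toy user-specific privacy risk score. Attribute weights and the
# warning threshold are invented for illustration.

def privacy_risk(detected_attributes, user_weights):
    """Sum the user's sensitivity weights over attributes found in the image."""
    return sum(user_weights.get(a, 0.0) for a in detected_attributes)

def should_warn(detected_attributes, user_weights, threshold=0.5):
    """Warn when the user-specific risk score crosses the threshold."""
    return privacy_risk(detected_attributes, user_weights) >= threshold

# A user highly sensitive about financial info, less so about faces:
weights = {"debit_card": 0.9, "face": 0.2, "license_plate": 0.4}

print(should_warn({"debit_card"}, weights))  # True
print(should_warn({"face"}, weights))        # False
```

The key property, matching the study's claim, is that the same detected attributes yield different warnings for different users, because the weights encode individual preferences rather than a single global policy.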


New smartphone app can manage your privacy preferences - Artificial Intelligence Online

#artificialintelligence

Researchers are developing a personalised privacy assistant app that can simplify the task of setting permissions for your smartphone applications. That is a job that requires well over a hundred decisions, an unmanageable number for the typical user, researchers from Carnegie Mellon University (CMU) in the US said. The privacy assistant can learn the user's preferences and quickly recommend the most appropriate settings, such as which apps may access the user's location or contact list. In a field test, people accepted almost 80 per cent of the recommendations made by the privacy assistant and, at the end of the study, these people indicated they were more comfortable with their privacy settings than users who did not have a privacy assistant, researchers said. "It is clear that people just cannot cope with the complexities of privacy settings associated with the apps they have on their smartphones," said Norman Sadeh from CMU.
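
The approach described above maps a handful of comfort-survey answers onto the hundred-plus individual permission decisions. A minimal sketch of that idea follows; the question keys, app categories, and the simple accept rule are made up for illustration and do not reflect CMU's actual recommender.

```python
# Minimal sketch of preference-based permission recommendations.
# Question keys, categories, and the decision rule are made up.

def recommend(comfort_answers, requests):
    """comfort_answers: {(app_category, permission): True/False} from a short survey.
    requests: list of (app_name, app_category, permission) permission requests.
    Returns a recommended 'allow'/'deny' per request, denying by default."""
    return {
        (name, perm): "allow" if comfort_answers.get((cat, perm), False) else "deny"
        for name, cat, perm in requests
    }

answers = {("finance", "location"): False, ("maps", "location"): True}
requests = [("MyBank", "finance", "location"), ("CityMaps", "maps", "location")]
print(recommend(answers, requests))
# {('MyBank', 'location'): 'deny', ('CityMaps', 'location'): 'allow'}
```

The leverage comes from answering per category rather than per app: two survey answers here decide both requests, and the same answers would cover every other finance or maps app on the phone.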